Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update

Synopsis

Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update

Type/Severity

Security Advisory: Important

Topic

Updated images that include numerous enhancements, security fixes, and bug
fixes are now available for Red Hat OpenShift Data Foundation 4.10.0 on Red
Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact
of Important. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

Description

Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3-compatible API.

Security Fix(es):

  • golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • golang: net/http/httputil: panic due to racy read of persistConn after handler panic (CVE-2021-36221)
  • golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)
  • golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)

Bug Fix(es):

These updated packages include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.10/html/4.10_release_notes/index

All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated packages, which provide numerous bug fixes and enhancements.

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information refer to the CVE
page(s) listed in the References section.

Solution

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat OpenShift Data Foundation 4 x86_64
  • Red Hat OpenShift Data Foundation for IBM Power, little endian 4 ppc64le
  • Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 s390x

Fixes

  • BZ - 1898988 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
  • BZ - 1954708 - [GSS][RFE] Restrict Noobaa from creating public endpoints for Azure Private Cluster
  • BZ - 1956418 - [GSS][RFE] Automatic space reclaimation for RBD
  • BZ - 1970123 - [GSS] [Azure] NooBaa insecure StorageAccount does not allow for TLS 1.2
  • BZ - 1972190 - Attempt to remove pv-pool based noobaa-default-backing-store fails and makes this pool stuck in Rejected state
  • BZ - 1974344 - critical ClusterObjectStoreState alert firing after installation of arbiter storage cluster, likely because ceph object user for cephobjectstore fails to be created, when storagecluster is reinstalled
  • BZ - 1981341 - Changing a namespacestore's targetBucket field doesn't check whether the target bucket actually exists
  • BZ - 1981694 - Restrict Noobaa from creating public endpoints for IBM ROKS Private cluster
  • BZ - 1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
  • BZ - 1991462 - helper pod runs with root privileges during Must-gather collection(affects ODF Managed Services)
  • BZ - 1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
  • BZ - 1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
  • BZ - 1996830 - OCS external mode should allow specifying names for all Ceph auth principals
  • BZ - 1996833 - ceph-external-cluster-details-exporter.py should have a read-only mode
  • BZ - 1999689 - Integrate upgrade testing from ocs-ci to the acceptance job for final builds before important milestones
  • BZ - 1999952 - Automate the creation of cephobjectstoreuser for obc metrics collector
  • BZ - 2003532 - [Tracker for RHEL BZ #2008825] Node upgrade failed due to "expected target osImageURL" MCD error
  • BZ - 2005801 - [KMS] Tenant config does not override backendpath if the key is specified in UPPER_CASE
  • BZ - 2005919 - [DR] [Tracker for BZ #2008587] when Relocate action is performed and the Application is deleted completely rbd image is not getting deleted on secondary site
  • BZ - 2021313 - [GSS] Cannot delete pool
  • BZ - 2022424 - System capacity card shows infinity % as used capacity.
  • BZ - 2022693 - [RFE] ODF health should reflect the health of Ceph + NooBaa
  • BZ - 2024107 - Retrieval of cached objects with `s3 sync` after change in object size in underlying storage results in an InvalidRange error
  • BZ - 2024545 - Overprovision Level Policy Control doesn't support custom storageclass
  • BZ - 2026007 - Use ceph 'osd safe-to-destroy' feature in OSD purge job
  • BZ - 2027666 - [DR] CephBlockPool resources reports wrong mirroringStatus
  • BZ - 2027826 - OSD Removal template needs to expose option to force remove the OSD
  • BZ - 2028559 - OBC stuck on pending post node failure recovery
  • BZ - 2029413 - [DR] Dummy image size is same as the size of image for which it was created
  • BZ - 2030602 - MCG not reporting standardized metric correctly for usage
  • BZ - 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
  • BZ - 2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
  • BZ - 2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
  • BZ - 2030839 - Consecutive dashes in OBC name
  • BZ - 2031023 - "dbStorageClassName" goes missing in storage cluster yaml for mcg standalone mode
  • BZ - 2031705 - [GSS] OBC is not visible by admin of a Project on Console
  • BZ - 2032404 - After a node restart, the RGW pod is stuck in a CrashLoopBackOff state
  • BZ - 2032412 - [DR] After Failback and PVC deletion the rbd images are left in trash
  • BZ - 2032656 - Rook not recovering when deleting osd deployment with kms encryption
  • BZ - 2032969 - No RBD mirroring daemon down alert when daemon is down
  • BZ - 2032984 - After creating a new SC it redirects to 404 error page instead of the "StorageSystems" page
  • BZ - 2033251 - Fix ODF 4.9 compatibility with OCP 4.10
  • BZ - 2034003 - NooBaa endpoint pod Terminated before new one comes in Running state after editing the configmap
  • BZ - 2034805 - upgrade not started for ODF 4.10
  • BZ - 2034904 - OCS operator version differs in CLI commands
  • BZ - 2035774 - Must Gather, Ceph files do not exist on MG directory
  • BZ - 2035995 - [GSS] odf-operator-controller-manager is in CLBO with OOM kill while upgrading OCS-4.8 to ODF-4.9
  • BZ - 2036018 - ROOK_CSI_* overrides missing from the CSV in 4.10
  • BZ - 2036211 - [GSS] noobaa-endpoint becomes CrashLoopBackOff when uploading metrics data to bucket
  • BZ - 2037279 - [Azure] OSDs go into CLBO state while mounting an RBD PVC
  • BZ - 2037318 - Helper Pod doesn't come up for MCG only must-gather
  • BZ - 2037497 - Consecutive dashes in OBC name
  • BZ - 2038884 - noobaa-operator is stuck in a CrashLoopBackOff (r.OBC is nil, invalid memory address or nil pointer dereference)
  • BZ - 2039240 - [KMS] Deployment of ODF cluster fails when cluster wide encryption is enabled using service account for KMS auth
  • BZ - 2040682 - [GSS] Complete multipart upload operation fails with error ' Cannot read property 'sort' of undefined'
  • BZ - 2041507 - Missing add modal for action "add capacity" in UI
  • BZ - 2042866 - must gather does not collect the yaml or describe output of the subscription
  • BZ - 2043017 - "CSI Addons" operator is not hidden in OperatorHub and Installed Operators page
  • BZ - 2043028 - the CSI-Addons sidecar is not automatically deployed, requires enabling in Rook ConfigMap
  • BZ - 2043406 - ReclaimSpaceJob status showing "reclaimedSpace" value as "0"
  • BZ - 2043513 - [Tracker for Ceph BZ 2044836] mon is in CLBO after upgrading to 4.10-113
  • BZ - 2044447 - ODF 4.9 deployment fails when deployed using the ODF managed service deployer (ocs-osd-deployer)
  • BZ - 2044823 - Update CSI sidecars to the latest release for 4.10
  • BZ - 2045084 - [SNO] controller-manager state is CreateContainerError
  • BZ - 2046186 - A TODO text block in the API browser
  • BZ - 2046254 - Topolvm-controller is failing to pull image
  • BZ - 2046677 - Reclaimspacecronjob is not created after adding the annotation reclaimspace.csiaddons.openshift.io/schedule in PVC
  • BZ - 2046766 - [IBM Z]: csi-rbdplugin pods failed to come up due to ImagePullBackOff from the "csiaddons" registry
  • BZ - 2046887 - use KMS_PROVIDER name for IBM key protect service as "ibmkeyprotect"
  • BZ - 2047162 - ReclaimSpaceJob failing, fstrim is executed on a non-existing mountpoint/directory
  • BZ - 2047201 - Add HPCS secret name to Ceph and NooBaa CR
  • BZ - 2047562 - CSI Sidecar containers do not start
  • BZ - 2047565 - PVC snapshot creation is not successful
  • BZ - 2047625 - Dockerfile changes for topolvm
  • BZ - 2047632 - mcg-operator failed to install on 4.10.0-126
  • BZ - 2047642 - Replace alpine/openssl image in the downstream build
  • BZ - 2048107 - vgmanager cannot list block devices on the node
  • BZ - 2048370 - CSI-Addons controller makes node reclaimspace request even when the PVC is not mounted to any pod.
  • BZ - 2048458 - python exporter script 'ceph-external-cluster-details-exporter.py' error cap mon does not match on ODF 4.10
  • BZ - 2049029 - MCG admission control webhooks don't work
  • BZ - 2049075 - openshift-storage namespace is stuck in terminating state during uninstall due to remaining csi-addons resources
  • BZ - 2049081 - ReclaimSpaceJob is failing for RBD RWX PVC
  • BZ - 2049424 - ODF Provider/Consumer mode - backport for missing content
  • BZ - 2049509 - ocs operator stuck on CrashLoopBackOff while installing with KMS
  • BZ - 2049718 - provider/consumer Mode: rook-ceph-csi-config configmap needs to be updated with the relevant subvolumegroup information
  • BZ - 2049727 - [DR] Mirror Peer stuck in ExchangingSecret State
  • BZ - 2049771 - We can see 2 ODF Multicluster Orchestrator operators in operator hub page
  • BZ - 2049790 - Add error handling for GetCurrentStorageClusterRef
  • BZ - 2050056 - [GSS][KMS] Tenant configmap does not override vault namespace
  • BZ - 2050142 - [DR] MCO operator is setting s3region as empty inside s3storeprofiles
  • BZ - 2050402 - Ramen doesn't generate correct VRG spec in sync mode
  • BZ - 2050483 - [DR] Post creating MirrorPeer, the ramen config map had invalid values
  • BZ - 2051249 - [GSS]noobaa-db-pg-0 Pod stuck CrashLoopBackOff state
  • BZ - 2051406 - Need commit hash in package json and logs
  • BZ - 2051599 - Use AAD while unwrapping the KEY from HPCS/Key Protect KMS
  • BZ - 2051913 - [KMS] Skip SC creation for vault SA based kms encryption
  • BZ - 2052027 - cephfs: rados omap leak after deletesnapshot
  • BZ - 2052438 - [KMS] Storagecluster is in progressing state due to failed RGW deployment when using cluster wide encryption with kubernetes auth method
  • BZ - 2052937 - [KMS] Auto-detection of KV version fails when using Vault namespaces
  • BZ - 2052996 - ODF deployment fails using RHCS in external mode due to cephobjectstoreuser
  • BZ - 2053156 - Avoid worldwide permission mode setting at time of nodestage of CephFS share
  • BZ - 2053517 - [DR] Applications are not getting DR protected
  • BZ - 2054147 - Provider/Consumer: Provider API server crashloopbackoff
  • BZ - 2054755 - Update storagecluster API in the odf-operator
  • BZ - 2061251 - [GSS]Object Upload failed with Unhandled exception when not using parameter "UseChunkEncoding = false" in s3 client in ODF 4.9